
Service Robots Roll Forward

Communications of the ACM

History is filled with examples of robotic devices designed to reduce, eliminate, or improve upon human labor. From washing machines to roaming vacuum cleaners, various machines have transformed the way we work and live. Today, far more sophisticated service robots are wheeling into the picture, aiming to take humans out of the labor loop and, in the process, improve the speed and efficiency of interactions. They can carry plates between a restaurant kitchen and diners' tables, deliver a toothbrush to someone on the 28th floor of a hotel, and ensure a hospital patient receives her medications on time. They also are adept at stocking shelves, taking orders at a fast food restaurant, and serving as emotional companions.


Distilling What We Know

Communications of the ACM

The sheer size and complexity of today's generative pretrained transformer (GPT) models are nothing less than astounding. OpenAI's GPT-3, for example, possesses somewhere in the neighborhood of 175 billion parameters, and there is speculation GPT-4 could have as many as 10 trillion parameters.a All of this introduces enormous overhead in terms of required cloud resources, including compute cycles and energy consumption. At the moment, the computing power required to train state-of-the-art artificial intelligence (AI) models is rising at a rate of 15x every two years.b The cost of training a large GPT model can run into the millions of dollars.c
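The scale figures above lend themselves to back-of-envelope arithmetic. A minimal sketch, assuming fp16 weights (2 bytes per parameter) and a smooth 15x-per-two-years growth curve; both assumptions are illustrative, not from the article:

```python
# Back-of-envelope sizing for large GPT models, using the article's
# figures: GPT-3 ~175 billion parameters; training compute for
# state-of-the-art models rising ~15x every two years.

def param_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory just to hold the weights, assuming fp16 (2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

def compute_growth(years: float, factor: float = 15.0, period: float = 2.0) -> float:
    """Multiplier on training compute after `years`, at 15x per 2 years."""
    return factor ** (years / period)

gpt3_params = 175e9
print(f"GPT-3 fp16 weights alone: ~{param_memory_gb(gpt3_params):.0f} GB")
print(f"Training-compute growth over 4 years: ~{compute_growth(4):.0f}x")
```

Even before any training state (optimizer moments, gradients, activations) is counted, the weights of a 175-billion-parameter model occupy hundreds of gigabytes, which is why such models must be sharded across many accelerators.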


3D Modeling Draws on AI

Communications of the ACM

Graphics rendering has always revolved around a basic premise: faster performance equals a better experience. Of course, graphics processing units (GPUs) that render the complex three-dimensional (3D) images used in video games, augmented reality, and virtual reality can push visual performance only so far before reaching a hardware ceiling. All this has led researchers down the path of artificial intelligence--including the use of neural nets--to unlock speed and quality improvements in 3D graphics. In 2022, for example, Nvidia introduced DLSS 3 (Deep Learning Super Sampling), a neural graphics engine that boosts rendering speed by as much as 530%.a The technology uses machine learning to predict which pixels can be generated on the fly, sparing the GPU from rendering them natively.
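One way to see where such speedups come from: if a frame is rendered at a lower internal resolution and the remaining pixels are predicted by a neural network, the native rendering workload shrinks by the ratio of the two resolutions. A toy sketch of that pixel-count arithmetic; the resolutions and the simple proportional model are illustrative assumptions, not Nvidia's actual pipeline:

```python
# Illustrative arithmetic for neural upscaling: pixels rendered
# natively vs. pixels predicted by a model.

def native_pixel_fraction(render_res, output_res):
    """Fraction of output pixels rendered natively; the rest are predicted."""
    rw, rh = render_res
    ow, oh = output_res
    return (rw * rh) / (ow * oh)

# Hypothetical case: render internally at 1080p, output at 4K.
frac = native_pixel_fraction((1920, 1080), (3840, 2160))
print(f"Natively rendered fraction of the frame: {frac:.2f}")  # 0.25
```

In this toy model, only a quarter of the output pixels are rendered by the GPU's traditional pipeline; the real speedup depends on how cheap the prediction step is relative to rendering.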


Better Algorithms through Faster Math

Communications of the ACM

Developing faster algorithms is an important but challenging goal for data scientists. The ability to accelerate complex computing tasks and reduce latency has far-reaching ramifications in areas such as natural language processing, video streaming, autonomous robotics, gaming, and extended reality. Yet for all the hype surrounding computer algorithms and the increasingly sophisticated ways they operate, a basic fact stands out: these algorithms are typically built atop matrix multiplication, a fundamental operation of linear algebra. The underlying mathematical framework has not changed a great deal since the inception of computing, and finding more efficient formulas has proved elusive. It is an issue attracting growing attention, particularly as machine learning (ML), deep learning (DL), artificial intelligence (AI), and machine automation advance into the mainstream.
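For concreteness, a textbook sketch of the operation in question: the schoolbook algorithm below performs n³ scalar multiplications for two n×n matrices, and faster methods such as Strassen's algorithm lower that count by trading some multiplications for additions:

```python
def matmul(a, b):
    """Schoolbook matrix multiplication: O(n^3) scalar multiplications."""
    n, m, p = len(a), len(b), len(b[0])
    assert all(len(row) == m for row in a), "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c)  # [[19, 22], [43, 50]]
```

Because nearly every layer of a neural network reduces to calls like this, even a small reduction in the exponent of matrix multiplication ripples through the cost of training and inference at scale.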


AI Rewrites Coding

Communications of the ACM

Software runs factories, controls transportation networks, and defines the way we interact with personal devices. It is estimated that somewhere in the neighborhood of 2.8 trillion lines of code have been written over the last two decades alone.a Yet it is easy to overlook a basic fact: people have to write software, and that is often a long, tedious, and error-prone process. Although low-code and no-code environments have simplified things, and even allowed non-programmers to build software through drag-and-drop interfaces, they still require considerable time and effort. Over the last several years, various systems and frameworks have appeared that can automate code generation.



Hidden Malware Ratchets Up Cybersecurity Risks

Communications of the ACM

Chaganti, R., Vinayakumar, R., Alazab, M., and Pham, T.D. Stegomalware: A Systematic Survey of Malware Hiding and Detection in Images, Machine Learning Models and Research Challenges, Cornell University, October 6, 2021.



Immersion Cooling Heats Up

Communications of the ACM

Depending on climate conditions, the availability of renewables, and other factors, immersion cooling can make a profound difference in both energy consumption and costs.


Can AI Learn to Forget?

Communications of the ACM

Machine learning has emerged as a valuable tool for spotting patterns and trends that might otherwise escape humans. The technology, which can build elaborate models based on everything from personal preferences to facial recognition, is used widely to understand behavior and make informed predictions. Yet for all the gains, there is also plenty of pain. A major problem associated with machine learning is that once an algorithm or model exists, expunging individual records or chunks of data is extraordinarily difficult. In most cases, it is necessary to retrain the entire model, sometimes with no assurance that the retrained model will not continue to incorporate the suspect data in some way, says Gautam Kamath, an assistant professor in the David R. Cheriton School of Computer Science at the University of Waterloo in Canada.
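A toy illustration, not drawn from the article, of why expunging a record is hard without a dedicated unlearning mechanism: once a record has influenced the fitted parameters, the only exact remedy for this deliberately simple "model" is retraining from scratch on the remaining data.

```python
# Minimal sketch: the trained "model" here is just the mean of the
# training data, so every record's influence is baked into it.

def train_mean_model(data):
    """Toy model: fit by computing the mean of the training data."""
    return sum(data) / len(data)

data = [2.0, 4.0, 6.0, 100.0]            # 100.0 is the record to expunge
model = train_mean_model(data)           # 28.0 -- skewed by 100.0

# Exact "unlearning" here means full retraining on the remaining records.
remaining = [x for x in data if x != 100.0]
retrained = train_mean_model(remaining)  # 4.0 -- influence removed
print(model, retrained)
```

Real models pose the same problem at vastly greater cost: the deleted record's influence is distributed across millions or billions of parameters, and retraining is the only remedy guaranteed to remove it exactly.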